Snegirev Yuriy Vladimirovich, Postgraduate student, Magnitogorsk State Technical University (Magnitogorsk, 38 Lenin Avenue), firstname.lastname@example.org
Tutarova Vlasta Dilyaurovna, Candidate of engineering sciences, associate professor, sub-department of computer science and applied mathematics, Magnitogorsk State Technical University (Magnitogorsk, 38 Lenin Avenue), email@example.com
The article considers the means of organizing parallel computing on a computer. It distinguishes between parallel and pipeline data processing and describes in detail the mechanisms of data exchange between concurrently running tasks. The authors identify three main methods of inter-task data exchange: shared memory, message passing, and promise mechanisms. The shared memory mechanism is simple to implement and fast in operation, but it suffers from certain well-known problems: race conditions and deadlocks. The study describes methods of synchronizing concurrently running tasks: mutexes, semaphores, and various kinds of monitors. It should be noted that in most industrially deployed programming environments inter-task data exchange is realized by this method. The article then describes various implementations of inter-task data exchange based on message passing. In the opinion of the first researchers of this mechanism, it is grounded in physics rather than in mathematical logic, set theory, algebra, or other mathematical disciplines. Such data exchange may be carried out either synchronously or asynchronously. Particular attention is paid to the actor model as the most promising method and the one most thoroughly developed both in theory and in practice. The article also describes other examples of organizing inter-task exchange on the basis of message passing: amorphous computing, dataflow programming, and SOAP. Two kinds of inter-task data exchange based on the promise mechanism are described: explicit and implicit. For each of the described means of organizing parallel computing, the authors give examples of program implementations. For instance, shared memory is commonly used in pthreads, the multithreading implementation for POSIX-compatible operating systems, while the Erlang language is built around the actor model; for the promise mechanism there are implementations in Java, LISP, and Haskell. The authors conclude that the method of organizing parallel computing should be selected according to the following criteria: the problem to be solved, the programming environment in use, and the possibility or impossibility of combining several methods at once.
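The shared-memory exchange with mutex synchronization summarized above can be illustrated by a minimal sketch. The article discusses pthreads; the fragment below is an analogue in Python's `threading` module, not the authors' code, showing how a mutex serializes access to a shared counter and thereby prevents the race condition the abstract mentions:

```python
import threading

counter = 0
lock = threading.Lock()  # mutex guarding the shared counter

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:        # critical section: only one thread increments at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000; without the lock the increments could be lost
```

Removing the `with lock:` line makes the read-modify-write on `counter` unprotected, which is exactly the race condition that mutexes, semaphores, and monitors are designed to exclude.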
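The actor model of message passing can likewise be sketched in a few lines. The article's reference implementation is Erlang; the following is only an illustrative Python analogue, in which an actor owns a private mailbox (a queue), shares no mutable state, and reacts to each message in turn:

```python
import queue
import threading

def actor(mailbox: queue.Queue, results: queue.Queue) -> None:
    """A minimal actor: processes its mailbox messages one at a time."""
    while True:
        msg = mailbox.get()
        if msg is None:       # conventional "stop" message ends the actor
            break
        results.put(msg * 2)  # this actor's behaviour: double each number

mailbox, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=actor, args=(mailbox, results))
t.start()

for n in (1, 2, 3):
    mailbox.put(n)            # asynchronous send: the sender does not wait
mailbox.put(None)
t.join()

replies = [results.get() for _ in range(3)]
print(replies)  # [2, 4, 6]
```

Because all communication goes through the mailbox and the actor touches no shared variables, no mutex is needed, which is the chief practical advantage of this style over shared memory.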
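Finally, the explicit variant of the promise mechanism can be shown with Python's `concurrent.futures` standard-library module; this is an illustrative analogue of the Java/LISP/Haskell implementations the article surveys, not code from the article. The caller receives a future immediately and blocks only at the explicit `result()` call:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x: int) -> int:
    # stands in for a long-running computation
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(slow_square, 7)  # returns at once with a promise
    # ... the caller is free to do other work here ...
    value = future.result()               # explicit wait: blocks until ready

print(value)  # 49
```

In the implicit variant, by contrast, the synchronization point is hidden: the promise is dereferenced automatically at the first use of its value rather than by an explicit call such as `result()`.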
concurrent computing, process, thread, mutex, semaphore, actor, pipeline.
1. Voevodin V. V., Voevodin Vl. V. Parallel'nye vychisleniya [Parallel computing]. Saint Petersburg: BKhV-Peterburg, 2002, 608 p.
2. Per Brinch Hansen. The Invention of Concurrent Programming. New York: Springer-Verlag, 2001.
3. Dijkstra E. W. Cooperating Sequential Processes. Eindhoven, The Netherlands: Technological University, 1965.
4. Dijkstra E. W. Hierarchical Ordering of Sequential Processes. Eindhoven, The Netherlands: Technological University, 1971.
5. Per Brinch Hansen. The Architecture of Concurrent Programs. New Jersey, Prentice Hall, 1997.
6. Buhr P. A., Frontier M. Dept. of Computer Science, University of Waterloo. Waterloo, Ontario, Canada, 1995.
7. Horstmann C. S., Cornell G. Core Java Volume I. Fundamentals. New Jersey, Prentice Hall, 2007.
8. Hewitt C., Bishop P., Steiger R. A Universal Modular Actor Formalism for Artificial Intelligence. IJCAI, 1973.
9. Abelson H., Knight T. F., Sussman G. J. et al. Amorphous Computing. Communications of the ACM, May 2000, available at: http://www.swiss.ai.mit.edu/projects/amorphous/cacm-2000.html.
10. Morrison J. P. Flow-Based Programming: A New Approach to Application Development. 2nd edition. New Jersey, CreateSpace, 2010.
11. Friedman D., Wise D. The Impact of Applicative Programming on Multiprocessing. International Conference on Parallel Processing, 1976, pp. 263–272.